Chapters Tool
Use the Chapters tool to enable AI-powered workflows to generate chapters for video and audio content. When you connect this tool to an AI node, the AI model can dynamically divide content into logical, timestamped segments based on user requests and workflow context.
In this article, you learn how to add, configure, and test the Chapters tool in your workflow.
Concept
The Chapters tool is a tool node that provides AI models with the ability to generate structured chapters from video and audio content in your VIDIZMO library. Unlike regular workflow nodes that execute in sequence, this tool is invoked by an AI node only when the model determines that chapter generation is needed.
Key capabilities:
- Automatic chapter generation - Generate timestamped chapters from video and audio content using an internal LLM call
- Structured output - Produce chapters with timestamps, titles, and descriptions parsed into structured chapter objects
- Configurable quality - Control chapter count, language, and generation behavior to match your use case
- Multi-language support - Generate chapters in different languages using culture codes
Understand How The Tool Works
This section explains how the Chapters tool operates within a workflow and how it connects to other nodes.
Tool Connectors
In the Workflow Designer, nodes use colored connectors to indicate the type of connection and data flow. The Chapters tool uses the green connector, which is specific to tool nodes.
Execution Flow
When a workflow runs, the Chapters tool operates in this sequence:
- The AI node receives user input from the chatbot.
- The AI model analyzes the input and determines whether chapter generation is needed.
- If chapter generation is needed, the AI invokes the Chapters tool through the green connector.
- The tool reads content from the specified mashup IDs or a state variable, and generates chapter markdown with timestamps using an internal LLM call.
- The generated chapters are parsed into structured chapter objects and stored in the state key specified in Output Variable.
- The AI node accesses the chapters and composes a response for the user.
- Use the Publish Mashup tool with `mode="chapters"` to publish the generated chapters to the mashup.
┌────────────┐      ┌─────────┐      ┌───────────────────┐
│ User Query │ ───► │ AI Node │ ───► │ Chapters Tool     │
└────────────┘      └─────────┘      │ (Green Connector) │
                         ▲           └────────────┬──────┘
                         │                        │
                         └── state.data.chapters ◄┘
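The product's internal parser isn't documented here, but as a rough sketch of step 5, chapter markdown in the default `[HH:MM:SS] — Chapter Title` format could be turned into structured chapter objects like this (Python; the field names are illustrative assumptions, not the actual schema):

```python
import re

# Matches a chapter heading line in the default "[HH:MM:SS] — Chapter Title"
# format; lines that don't match (descriptions, blanks) are skipped.
CHAPTER_LINE = re.compile(r"\[(\d{2}):(\d{2}):(\d{2})\]\s*—\s*(.+)")

def parse_chapters(markdown: str) -> list[dict]:
    chapters = []
    for line in markdown.splitlines():
        match = CHAPTER_LINE.match(line.strip())
        if not match:
            continue
        h, m, s, title = match.groups()
        chapters.append({
            "timestamp": f"{h}:{m}:{s}",
            "seconds": int(h) * 3600 + int(m) * 60 + int(s),
            "title": title.strip(),
        })
    return chapters

sample = """[00:00:00] — Introduction
A brief welcome and agenda overview.
[00:04:30] — Setting Up the Environment
[00:12:15] — Deploying the First Workflow"""

print(parse_chapters(sample)[1]["seconds"])  # 270
```

A list like this is what the AI node would then find under the configured Output Variable (for example, `state.data.chapters`).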
When To Use This Tool
Use the Chapters tool when your workflow requires:
- AI-driven chapter generation for long-form video or audio content
- Automatic segmentation of recordings into logical, navigable sections
- Timestamped chapter markers to improve content discoverability and navigation
- Structured chapter data that can be published to mashups using the Publish Mashup tool
Add The Chapters Tool To Your Workflow
Follow these steps to add the Chapters tool to your workflow canvas.
1. Go to Portal Settings > Chatbot > Workflow.
2. Select an existing workflow or create a new workflow.
3. In the Node Library, expand the Tools category.
4. Drag Chapters Tool onto the canvas.
Connect The Tool To An AI Node
After you add the Chapters tool to the canvas, connect it to an AI node.
1. Locate your AI node (such as an LLM node) on the canvas.
2. Drag a connection line from the AI node's green connector to the input connector on the Chapters tool node.
3. Release to create the connection. A green connector (●) indicates a successful tool connection.
NOTE: The green connector indicates that the tool is available to the AI node for on-demand invocation. The tool doesn't execute in sequence with other nodes; it executes only when the AI model decides to invoke it.
Configure The Chapters Tool
Select the Chapters node to open the Node Configuration Panel. You can configure the following options:
Description
Instructions for the LLM on how and when to use this tool. The default description provides guidance including:
- When to invoke chapter generation (user asks for chapters, content segmentation, or navigation markers)
- How to read content from mashup IDs or state variables
- Expected output format with timestamps and chapter titles
- How to publish chapters using the Publish Mashup tool afterwards
The AI uses this description to determine when chapter generation is appropriate. For example, when a user asks "Generate chapters for this training video," the AI reads the description to understand it should invoke the Chapters tool with the video's mashup ID.
TIP: Keep the default description unless you need to customize the AI's chapter generation behavior for specific use cases.
Chapter Generation Parameters
You can configure the following options:
- IDs: The IDs of mashups to generate chapters for. The AI typically populates this from search results, user-selected content, or conversation context. Enter a fixed ID for testing, or use `${state.data.mashup_id}` to reference a dynamically determined ID.
- Input Variable: The state key to read previously generated text from. Use this when content text has already been retrieved by another tool or node and is stored in state. For example, use `${state.data.content_text}` to reference transcript text retrieved by the Read Content tool.
- Output Variable: The state key to store the parsed chapters under. Default is `chapters`. The AI node and subsequent workflow nodes access results using `${state.data.<variable_name>}`. For example, with the default value, access chapters as `${state.data.chapters}`.
- Culture: The culture or language code for the generated chapters. Default is `en-us`. Use standard culture codes such as `de-de` for German or `fr-fr` for French. Supports `${variable}` syntax for dynamic values.

All chapter generation parameters support Fixed (static value) and Expression (dynamic value using `${variable}` syntax) input modes.
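To make the Expression mode concrete, here is a minimal sketch of how a `${...}` expression could be resolved against workflow state before the tool runs. The resolver function and the state layout are illustrative assumptions, not VIDIZMO's actual implementation:

```python
import re

# Finds "${...}" placeholders in a parameter value.
EXPR = re.compile(r"\$\{([^}]+)\}")

def resolve(value: str, state: dict) -> str:
    """Replace each ${state.path.to.key} with the value found in state."""
    def lookup(match: re.Match) -> str:
        node = {"state": state}  # root namespace, so paths start with "state."
        for part in match.group(1).split("."):
            node = node[part]
        return str(node)
    return EXPR.sub(lookup, value)

# Hypothetical workflow state populated by an earlier tool or node.
state = {"data": {"mashup_id": "12345"}}

print(resolve("${state.data.mashup_id}", state))  # prints "12345"
```

A Fixed-mode value would skip this substitution step entirely and be passed through as a literal string.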
Model Settings
These optional settings let you override the default AI model used for chapter generation.
- System Prompt: Instructions that guide how the AI generates chapters. The default prompt instructs the model to produce chapters in the format `[HH:MM:SS] — Chapter Title` with brief descriptions. Customize this when you need chapters in a specific format, depth, or focus area. Leave empty to use the default chapters prompt.
- Model Provider: The AI provider to use for generating chapters. When left empty, the tool uses the default model configured for the workflow. Specify a provider when you need a particular model for chapter generation quality or cost reasons.
- Model ID: The specific model identifier from the selected provider. Required if Model Provider is set. When left empty, the tool uses the provider's default model.
- Temperature: Controls the randomness of the generated chapters. Lower values produce more focused, deterministic output; higher values produce more creative and varied chapter titles and descriptions. Default is `0.3`.
- Max Token Limit: The maximum context window size in tokens. Overrides the default model configuration if set. Use this to control token consumption for large content.
- Reasoning: Enable reasoning or thinking mode for the internal chapter generation LLM. When enabled, the model uses extended reasoning to produce more accurate chapter boundaries and titles. Supports Fixed and Expression input modes.
Advanced Settings
- Min Chapters: The minimum number of chapters required. Chapter generation fails if fewer chapters are produced than this threshold. Default is `2`.
- Logging Mode: Controls logging verbosity for tracing providers. Select All for full logging or None to skip logging for this node.
- Wait For All Edges: Controls execution timing for nodes with multiple incoming edges. When set to True (default), the node waits for all incoming direct edges before executing once. Use this for parallel fan-in scenarios when all branches must complete. When set to False, the node executes each time any incoming edge arrives. Use this for conditional convergence such as if/switch/loop branches. Supports Fixed and Expression input modes.
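The Min Chapters setting behaves as a post-generation guard. A hypothetical sketch of that check (the function and error message are assumptions for illustration):

```python
def check_min_chapters(chapters: list, min_chapters: int = 2) -> list:
    """Fail chapter generation when the result falls below the threshold."""
    if len(chapters) < min_chapters:
        raise ValueError(
            f"Chapter generation produced {len(chapters)} chapters; "
            f"at least {min_chapters} are required"
        )
    return chapters
```

If the check fails, the node's Retry Configuration (below) determines whether generation is attempted again.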
Retry Configuration
Configure automatic retry behavior if the node fails during execution.
- Max Attempts: The maximum number of retry attempts if the node fails. Range is 0 to 10.
- Wait Time (ms): The wait time between retry attempts, in milliseconds. Range is 0 to 60,000 ms.
- Retry On Errors: Specify which error types should trigger a retry. Leave empty to retry on all errors.
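The three settings combine as in this illustrative sketch; the parameter names mirror the settings above but are not the product's API:

```python
import time

def run_with_retry(fn, max_attempts=3, wait_ms=1000, retry_on=(Exception,)):
    """Run fn, retrying up to max_attempts times on matching errors,
    waiting wait_ms milliseconds between attempts."""
    last_error = None
    for attempt in range(max_attempts + 1):  # first try plus retries
        try:
            return fn()
        except retry_on as err:
            last_error = err
            if attempt < max_attempts:
                time.sleep(wait_ms / 1000)
    raise last_error
```

With Retry On Errors left empty, `retry_on` is effectively every error type; listing specific error types narrows the tuple so that other failures surface immediately.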
Test The Configuration
After you configure the Chapters tool, test the workflow to verify correct behavior.
1. Go to Portal Settings > Chatbot > Agents.
2. Select the agent associated with your workflow, or create a new agent and assign your workflow.
3. Open the chatbot interface in the portal.
4. Enter a query that should trigger chapter generation. For example:
   - "Generate chapters for this training video"
   - "Create chapter markers for the onboarding recording"
   - "Divide this presentation into chapters"
5. Verify that the agent returns structured chapters with timestamps, titles, and descriptions.
6. Check that the number of chapters meets the configured Min Chapters threshold.
7. Use the Publish Mashup tool with `mode="chapters"` to publish the chapters to the mashup, and verify that they appear in the player.
8. If results don't match expectations, return to the Workflow Designer and adjust your configuration.
Best Practices
- Pair the Chapters tool with the Read Content tool or Search Mashup tool so the AI can first retrieve content and then generate chapters in a single interaction.
- Always connect a Publish Mashup tool (with `mode="chapters"`) in your workflow so the AI can publish generated chapters to the mashup after generation.
- Customize the System Prompt to control chapter format and granularity. For example, instruct the model to create more granular chapters for training content or broader chapters for general presentations.
- Adjust Temperature based on your needs. Use lower values for consistent, predictable chapter titles and higher values for more creative segmentation.
- Set Min Chapters appropriately for your content type. Training videos may benefit from more chapters, while short recordings may need fewer.
- Enable Reasoning mode for complex content where the model needs extended thinking to identify accurate chapter boundaries.
Related Articles
- Tools in VIDIZMO Intelligence Hub
- Publish Mashup tool
- Read Content tool
- Search Mashup tool
- Workflow Designer
- Variables
- Nodes